chore(rethinkdb): OrganizationUser: Phase 2 #9953
Conversation
Signed-off-by: Matt Krick <matt.krick@gmail.com>
Walkthrough

The recent changes across various files primarily focus on improving the handling of user counts, subscription management, and billing within the organization's context. Key modifications include the removal of fields and properties related to …

Changes
Actionable comments posted: 32
Outside diff range, codebase verification and nitpick comments (11)
packages/server/utils/getActiveDomainForOrgId.ts (1)
Lines 1-3: Organise imports. Consider grouping the imports logically (external libraries, internal modules, etc.) to improve readability.

+import getKysely from '../postgres/getKysely'
 import {DataLoaderInstance} from '../dataloader/RootDataLoader'
packages/server/graphql/queries/helpers/countTiersForUserId.ts (2)
Lines 1-3: Organise imports. Consider grouping the imports logically (external libraries, internal modules, etc.) to improve readability.

 // breaking this out into its own helper so it can be used directly to
 // populate segment traits
+import {DataLoaderInstance} from '../../../dataloader/RootDataLoader'
Lines 6-18: Ensure proper error handling. The function lacks error handling for data loading operations. Consider adding try-catch blocks to handle potential errors gracefully.

+try {
  const allOrgUsers = await dataLoader.get('organizationUsersByUserId').load(userId)
  const organizationUsers = allOrgUsers.filter(({inactive}) => !inactive)
  const tierStarterCount = organizationUsers.filter(
    (organizationUser) => organizationUser.tier === 'starter'
  ).length
  const tierTeamCount = organizationUsers.filter(
    (organizationUser) => organizationUser.tier === 'team'
  ).length
  const tierEnterpriseCount = organizationUsers.filter(
    (organizationUser) => organizationUser.tier === 'enterprise'
  ).length
  const tierTeamBillingLeaderCount = organizationUsers.filter(
    (organizationUser) =>
      organizationUser.tier === 'team' && organizationUser.role === 'BILLING_LEADER'
  ).length
  return {tierStarterCount, tierTeamCount, tierEnterpriseCount, tierTeamBillingLeaderCount}
+} catch (error) {
+  console.error('Failed to count tiers for userId:', userId, error)
+  throw error
+}

packages/server/utils/setTierForOrgUsers.ts (1)
Lines 1-3: Remove unused import. The getRethink import is no longer needed and should be removed to clean up the code.

-import getRethink from '../database/rethinkDriver'
 import getKysely from '../postgres/getKysely'
packages/server/graphql/private/queries/suOrgCount.ts (1)
Lines 18-19: Remove debug code before production deployment. The console.log(pgResults) statement should be removed or replaced with a proper logging mechanism before production deployment.

- console.log(pgResults)
+ // console.log(pgResults) // Uncomment for debugging

packages/server/graphql/private/queries/suProOrgInfo.ts (1)
Line 35: Remove debug code before production deployment. The console.log({pgResults, activeOrgIds}) statement should be removed or replaced with a proper logging mechanism before production deployment.

- console.log({pgResults, activeOrgIds})
+ // console.log({pgResults, activeOrgIds}) // Uncomment for debugging

packages/server/billing/helpers/handleEnterpriseOrgQuantityChanges.ts (1)
Lines 13-14: Use consistent indentation for array elements. Ensure that the elements of the array passed to Promise.all are consistently indented for better readability.

 const [orgUsers, subscriptionItem] = await Promise.all([
-  dataLoader.get('organizationUsersByOrgId').load(orgId),
-  manager.getSubscriptionItem(stripeSubscriptionId)
+  dataLoader.get('organizationUsersByOrgId').load(orgId),
+  manager.getSubscriptionItem(stripeSubscriptionId)
 ])

packages/server/postgres/migrations/1720556055134_OrganizationUser-phase1.ts (2)
Lines 36-38: Consider index naming conventions. Consider following a consistent naming convention for indexes to improve readability and maintainability.

-CREATE INDEX IF NOT EXISTS "idx_OrganizationUser_tier" ON "OrganizationUser"("tier") WHERE "removedAt" IS NULL AND "inactive" = FALSE;
-CREATE INDEX IF NOT EXISTS "idx_OrganizationUser_orgId" ON "OrganizationUser"("orgId") WHERE "removedAt" IS NULL;
-CREATE INDEX IF NOT EXISTS "idx_OrganizationUser_userId" ON "OrganizationUser"("userId") WHERE "removedAt" IS NULL;
+CREATE INDEX IF NOT EXISTS "organization_user_tier_idx" ON "OrganizationUser"("tier") WHERE "removedAt" IS NULL AND "inactive" = FALSE;
+CREATE INDEX IF NOT EXISTS "organization_user_org_id_idx" ON "OrganizationUser"("orgId") WHERE "removedAt" IS NULL;
+CREATE INDEX IF NOT EXISTS "organization_user_user_id_idx" ON "OrganizationUser"("userId") WHERE "removedAt" IS NULL;
Lines 49-51: Ensure proper cleanup in the down function. The down function in 1720556055134_OrganizationUser-phase1.ts does not handle dropping indexes or constraints associated with the OrganizationUser table. Please ensure that all associated indexes and constraints are dropped to complete the cleanup process.

- Add DROP INDEX and DROP CONSTRAINT statements for any indexes and constraints related to the OrganizationUser table.

Analysis chain

Ensure proper cleanup in the down function. Ensure that all associated indexes and constraints are dropped in the down function. Let's search the migration files for any DROP INDEX, DROP CONSTRAINT, or similar statements to ensure proper cleanup in the down function.

Scripts executed

The following scripts were executed for the analysis:

Script:
#!/bin/bash
# Description: Verify that all associated indexes and constraints are dropped in the `down` function.
# Test: Check the table schema after running the `down` function to ensure all indexes and constraints are removed.
psql -c '\d+ "OrganizationUser"'

Length of output: 106

Script:
#!/bin/bash
# Search for DROP INDEX, DROP CONSTRAINT, or similar statements in the migration files to ensure proper cleanup.
rg 'DROP INDEX|DROP CONSTRAINT|DROP SEQUENCE|DROP TRIGGER' --glob '**/migrations/*.ts'

Length of output: 8815
packages/server/postgres/migrations/1720640784862_OrganizationUser-phase2.ts (1)
Lines 43-44: Initialise cursor values. Ensure that the cursor values curJoinedAt and curId are initialized to appropriate minimum values.

packages/server/dataloader/__tests__/isOrgVerified.test.ts (1)
Lines 80-81: Remove redundant null checks. The removedAt field is already being set to null if not provided, making the null checks redundant. Consider removing them for cleaner code.

-inactive: member.inactive ?? false,
-role: member.role ?? null,
-removedAt: member.removedAt ?? null,
+inactive: member.inactive ?? false,
+role: member.role ?? null,
+removedAt: member.removedAt,
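The behaviour this comment relies on can be seen in isolation: `??` substitutes its right-hand side only for `null` or `undefined`, so on a field that is already `null` it is a no-op. A minimal sketch (the sample member object is hypothetical, not the actual test fixture):

```typescript
type MemberInput = {inactive?: boolean; role?: string | null; removedAt?: Date | null}

const member: MemberInput = {removedAt: null} // inactive and role omitted

const row = {
  inactive: member.inactive ?? false, // undefined -> defaults to false
  role: member.role ?? null, // undefined -> defaults to null
  removedAt: member.removedAt ?? null // already null, so ?? changes nothing
}

console.log(row) // { inactive: false, role: null, removedAt: null }
```

Note that `?? null` still matters in the case where `removedAt` is omitted entirely (undefined), which is why dropping it is a judgment call rather than a pure simplification.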
const orgUsers = await dataLoader.get('organizationUsersByOrgId').load(orgId)
const billingLeaderOrgUsers = orgUsers.filter(
  ({role}) => role && ['BILLING_LEADER', 'ORG_ADMIN'].includes(role)
)
Ensure the role field is always present. The filter condition assumes that the role field is always present in the organization user object. Consider adding a null check or default value to avoid potential runtime errors.
- const billingLeaderOrgUsers = orgUsers.filter(
- ({role}) => role && ['BILLING_LEADER', 'ORG_ADMIN'].includes(role)
- )
+ const billingLeaderOrgUsers = orgUsers.filter(
+ (user) => user.role && ['BILLING_LEADER', 'ORG_ADMIN'].includes(user.role)
+ )
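Reduced to a self-contained sketch, the null-safe filter behaves like this (sample data hypothetical):

```typescript
type OrgUser = {role: string | null}

const orgUsers: OrgUser[] = [
  {role: 'BILLING_LEADER'},
  {role: null}, // regular member with no elevated role
  {role: 'MEMBER'},
  {role: 'ORG_ADMIN'}
]

// The truthiness check on user.role guards the includes() call when role is null
const billingLeaderOrgUsers = orgUsers.filter(
  (user) => user.role && ['BILLING_LEADER', 'ORG_ADMIN'].includes(user.role)
)

console.log(billingLeaderOrgUsers.length) // 2
```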
  .nth(0)
  .default(null)
  .run(),
dataLoader.get('organizationUsersByUserIdOrgId').load({orgId, userId}),
Ensure proper error handling for the load method. The dataLoader.get('organizationUsersByUserIdOrgId').load method might throw an error if the data loading fails. Consider adding error handling to manage this scenario gracefully.
- dataLoader.get('organizationUsersByUserIdOrgId').load({orgId, userId}),
+ dataLoader.get('organizationUsersByUserIdOrgId').load({orgId, userId}).catch((error) => {
+ Logger.log(error)
+ return null
+ }),
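The shape of that suggestion, reduced to a self-contained sketch: a loader call that rejects is degraded to null instead of failing the surrounding Promise.all (the failing loader here is simulated; the real one returns an organization user):

```typescript
// Simulated loader that always rejects, standing in for a failed data load
const failingLoad = (): Promise<string> => Promise.reject(new Error('load failed'))

// .catch absorbs the rejection and substitutes a null fallback value
const safeLoad = (): Promise<string | null> => failingLoad().catch(() => null)

safeLoad().then((result) => {
  console.log(result) // null: the rejection never propagates
})
```

The trade-off is that a null result is then indistinguishable from "no such row", so callers must treat both the same way.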
@@ -40,7 +40,7 @@ export default {
   // RESOLUTION
   const {stripeId} = await dataLoader.get('organizations').loadNonNull(orgId)
   const dbAfter = after ? new Date(after) : r.maxval
-  const [tooManyInvoices, orgUserCount] = await Promise.all([
+  const [tooManyInvoices, orgUsers] = await Promise.all([
Replace RethinkDB maxval with a Date object. Using r.maxval is specific to RethinkDB and should be replaced with a PostgreSQL-compatible value, such as new Date(8640000000000000) (the maximum date value in JavaScript).
- const dbAfter = after ? new Date(after) : r.maxval
+ const dbAfter = after ? new Date(after) : new Date(8640000000000000)
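The constant is not arbitrary: ECMAScript Dates cover exactly ±8,640,000,000,000,000 ms (±100,000,000 days) around the epoch, so one millisecond past that bound yields an Invalid Date:

```typescript
// The maximum representable JavaScript Date, usable as a "no upper bound" sentinel
const MAX_DATE = new Date(8640000000000000)

console.log(Number.isNaN(MAX_DATE.getTime())) // false: still a valid Date
console.log(Number.isNaN(new Date(8640000000000001).getTime())) // true: out of range
```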
await pg
  .with('Org', (qc) => qc.insertInto('Organization').values(org))
  .insertInto('OrganizationUser')
  .values(orgUsers)
  .execute()
Ensure atomicity of database operations.
The PostgreSQL insert operation and the RethinkDB insert operation should be wrapped in a transaction to ensure atomicity and consistency.
- await pg
- .with('Org', (qc) => qc.insertInto('Organization').values(org))
- .insertInto('OrganizationUser')
- .values(orgUsers)
- .execute()
+ await pg.transaction().execute(async (trx) => {
+ await trx
+ .with('Org', (qc) => qc.insertInto('Organization').values(org))
+ .insertInto('OrganizationUser')
+ .values(orgUsers)
+ .execute()
+ })
const handleInactive = async (userId: string, dataLoader: DataLoaderWorker) => {
  const orgUsers = await dataLoader.get('organizationUsersByUserId').load(userId)
  const orgIds = orgUsers.map(({orgId}) => orgId)
  await adjustUserCount(userId, orgIds, InvoiceItemType.UNPAUSE_USER, dataLoader).catch(Logger.log)
  // TODO: re-identify
}
Ensure proper error handling and address TODO.
The error handling with Logger.log is appropriate. Address the TODO comment to re-identify the user.
Do you want me to address the TODO and implement the re-identify logic, or open a GitHub issue to track this task?
getKysely()
  .updateTable('OrganizationUser')
  .set({inactive})
  .where('userId', '=', userId)
  .where('removedAt', 'is', null)
  .execute(),
Redundant RethinkDB update operations. The update operations for OrganizationUser can be refactored to use only PostgreSQL, aligning with the rest of the migration.
- getKysely()
- .updateTable('OrganizationUser')
- .set({inactive})
- .where('userId', '=', userId)
- .where('removedAt', 'is', null)
- .execute(),
- r
- .table('OrganizationUser')
- .getAll(userId, {index: 'userId'})
- .filter({removedAt: null})
- .update({inactive})
- .run()
+ getKysely()
+ .updateTable('OrganizationUser')
+ .set({inactive})
+ .where('userId', '=', userId)
+ .where('removedAt', 'is', null)
+ .execute()
// TEST in Phase 2!
console.log(pgResults)
Remove debug logging.
Remove the debug logging statement before merging the code.
- console.log(pgResults)
@@ -11,7 +12,7 @@
   {dataLoader}
 ) => {
   const r = await getRethink()
+  const pg = getKysely()
Ensure proper error handling for database connection.
Ensure that the database connection is properly handled and closed in case of errors.
+let pg
+try {
+  pg = getKysely()
+} catch (error) {
+  console.error('Error initializing database connection:', error)
+  throw new Error('Database connection failed')
+}

(Declaring pg outside the try block keeps it in scope for the queries that follow.)
Committable suggestion was skipped due to low confidence.
@@ -51,7 +51,7 @@ export async function checkTableEq(
   .run()) as RethinkDoc[]
   if (rethinkRows.length === 0) break
   const lastRow = rethinkRows[rethinkRows.length - 1]!
-  curUpdatedDate = lastRow.updatedAt
+  curUpdatedDate = lastRow.joinedAt
Optimise the loop condition. The loop condition for (let i = 0; i < 1e6; i++) is arbitrary. Consider optimising it based on actual data size.
- for (let i = 0; i < 1e6; i++) {
+ while (true) {
Committable suggestion was skipped due to low confidence.
using a while true loop would open us up to a threat of an infinite loop, right? isn't that worse than an arbitrarily big for loop?
@mattkrick, you are correct that using a while (true) loop without a proper exit condition can lead to an infinite loop, which is indeed worse than an arbitrarily large for loop. A more robust approach would involve setting a logical exit condition based on the actual data size or another relevant criterion. This way, we can avoid the risk of an infinite loop while also not relying on an arbitrarily large number.
Would you like me to propose a specific solution or gather more information on the data size to determine a better exit condition?
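One way to satisfy both concerns is a data-driven exit condition (stop on an empty batch) combined with an explicit safety bound derived from the data size, rather than a magic 1e6. A self-contained sketch over an in-memory array (all names hypothetical; the real loop pages a RethinkDB table):

```typescript
const rows = Array.from({length: 25}, (_, i) => i) // stand-in for table rows
const batchSize = 10

// Safety bound derived from the data size, only a backstop against bugs
const maxIterations = Math.ceil(rows.length / batchSize) + 1

const fetched: number[] = []
let cursor = 0
for (let i = 0; i < maxIterations; i++) {
  const batch = rows.slice(cursor, cursor + batchSize)
  if (batch.length === 0) break // data-driven exit: no more rows
  fetched.push(...batch)
  cursor += batchSize
}

console.log(fetched.length) // 25: every row visited exactly once
```

In the migration itself the bound could come from a cheap row count; the loop still terminates on the first empty batch, so the bound never governs normal operation.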
const pgResults = await pg
  .selectFrom('OrganizationUser')
  .select(({fn}) => ['userId', fn.agg<string[]>('array_agg', ['orgId']).as('orgIds')])
  .where('userId', 'in', userIdBatch)
  .where('removedAt', 'is', null)
  .groupBy('userId')
  .execute()
Optimise query for fetching user IDs.
Consider optimising the query for fetching user IDs to improve performance.
-const pgResults = await pg
-  .selectFrom('OrganizationUser')
-  .select(({fn}) => ['userId', fn.agg<string[]>('array_agg', ['orgId']).as('orgIds')])
-  .where('userId', 'in', userIdBatch)
-  .where('removedAt', 'is', null)
-  .groupBy('userId')
-  .execute()
+const pgResults = await pg
+  .selectFrom('OrganizationUser')
+  .select(['userId', sql<string[]>`array_agg("orgId")`.as('orgIds')])
+  .where('userId', 'in', userIdBatch)
+  .where('removedAt', 'is', null)
+  .groupBy('userId')
+  .execute()

(Note: Kysely has no pg.raw; a raw fragment would use the sql template tag imported from kysely instead.)
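What the array_agg/groupBy query computes can be sketched in plain TypeScript (sample rows hypothetical):

```typescript
type Row = {userId: string; orgId: string}

const rows: Row[] = [
  {userId: 'u1', orgId: 'o1'},
  {userId: 'u1', orgId: 'o2'},
  {userId: 'u2', orgId: 'o3'}
]

// Equivalent of GROUP BY "userId" with array_agg("orgId")
const grouped = new Map<string, string[]>()
for (const {userId, orgId} of rows) {
  const orgIds = grouped.get(userId) ?? []
  orgIds.push(orgId)
  grouped.set(userId, orgIds)
}

console.log(grouped.get('u1')) // [ 'o1', 'o2' ]
```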
This PR exceeds the recommended size of 1000 lines. Please make sure you are NOT addressing multiple issues with one PR. Note this PR will be delayed and might be rejected due to its size.
Actionable comments posted: 0
Outside diff range, codebase verification and nitpick comments (1)
packages/server/graphql/public/mutations/setOrgUserRole.ts (1)
Line 98: Ensure consistency in role updates between PostgreSQL and RethinkDB. The role updates in the setOrgUserRole function are performed sequentially without ensuring atomicity across the two databases. If one update fails while the other succeeds, it can lead to inconsistencies.

- The function updates the role in PostgreSQL first and then in RethinkDB.
- There is no rollback mechanism to handle failures in one database after a successful update in the other.

Consider implementing a transaction management strategy or a rollback mechanism to ensure both updates succeed or fail together.
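One shape such a mechanism could take is a compensating rollback: perform the first write, and if the second fails, undo the first before rethrowing. A minimal sketch with simulated stores (all names hypothetical; this is not the actual mutation code):

```typescript
type Write = () => Promise<void>

// Runs both writes; if the second store fails, compensates the first so the
// stores do not diverge, then rethrows so the caller still sees the failure.
async function dualWrite(writePg: Write, writeRethink: Write, rollbackPg: Write) {
  await writePg()
  try {
    await writeRethink()
  } catch (err) {
    await rollbackPg()
    throw err
  }
}

// Simulated run where the RethinkDB write fails:
let role = 'BILLING_LEADER'
dualWrite(
  async () => {
    role = 'ORG_ADMIN' // PG write succeeds
  },
  async () => {
    throw new Error('rethink write failed')
  },
  async () => {
    role = 'BILLING_LEADER' // compensating rollback of the PG write
  }
).catch(() => {
  console.log(role) // BILLING_LEADER: the PG write was rolled back
})
```

Compensation is weaker than a true cross-store transaction (a crash between the write and the rollback still leaves the stores divergent), but it covers the common failure path.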
Analysis chain
Ensure consistency in role updates between PostgreSQL and RethinkDB.
The role update in PostgreSQL should be consistent with the role update in RethinkDB. Verify that both updates succeed or fail together.
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify consistency in role updates between PostgreSQL and RethinkDB.
# Test: Ensure the role updates succeed or fail together.
rg --type ts 'setOrgUserRole'

Length of output: 1044

Script:
#!/bin/bash
# Searching for the implementation of the setOrgUserRole function to verify consistency in role updates between PostgreSQL and RethinkDB.
ast-grep --lang typescript --pattern $'const setOrgUserRole: MutationResolvers[\'setOrgUserRole\'] = async ($_$) => { $$$ }' packages/server/graphql/public/mutations/setOrgUserRole.ts

Length of output: 188

Script:
#!/bin/bash
# Searching for the implementation of the setOrgUserRole function and including context lines to inspect the implementation details.
rg 'const setOrgUserRole' -A 50 packages/server/graphql/public/mutations/setOrgUserRole.ts

Length of output: 1928

Script:
#!/bin/bash
# Extending the context lines to capture the complete implementation of the setOrgUserRole function.
rg 'const setOrgUserRole' -A 100 packages/server/graphql/public/mutations/setOrgUserRole.ts

Length of output: 3226
Description
Migrates old data over to PG.
About 180 rows didn't make it over because the underlying user did not exist in the DB.
Summary by CodeRabbit
New Features
Bug Fixes
Refactor
Tests